

Training Quantized Nets: A Deeper Understanding

Neural Information Processing Systems

Currently, deep neural networks are deployed on low-power portable devices by first training a full-precision model using powerful hardware, and then deriving a corresponding low-precision model for efficient inference on such systems. However, training models directly with coarsely quantized weights is a key step towards learning on embedded platforms that have limited computing resources, memory capacity, and power consumption. Numerous recent publications have studied methods for training quantized networks, but these studies have mostly been empirical. In this work, we investigate training methods for quantized neural networks from a theoretical viewpoint. We first explore accuracy guarantees for training methods under convexity assumptions. We then look at the behavior of these algorithms for non-convex problems, and show that training algorithms that exploit high-precision representations have an important greedy search phase that purely quantized training methods lack, which explains the difficulty of training using low-precision arithmetic.
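The contrast the abstract draws, between training methods that keep a high-precision copy of the weights and methods that store only quantized values, can be made concrete in a few lines. Below is a minimal sketch on a toy quadratic objective; the uniform-grid `quantize`, the step sizes, and the `grad_fn` callback are illustrative assumptions, not details taken from the paper.

```python
import numpy as np

def quantize(w, delta=0.05):
    """Round each weight to the nearest point on a uniform grid of spacing delta."""
    return delta * np.round(w / delta)

def train_with_fp_buffer(grad_fn, w0, lr=0.1, steps=200, delta=0.05):
    """BinaryConnect-style: keep a full-precision buffer, quantize only for the
    gradient evaluation. The buffer accumulates small steps that rounding alone
    would erase."""
    w_real = w0.copy()
    for _ in range(steps):
        g = grad_fn(quantize(w_real, delta))  # forward/backward at quantized weights
        w_real -= lr * g                      # update the high-precision buffer
    return quantize(w_real, delta)

def train_fully_quantized(grad_fn, w0, lr=0.1, steps=200, delta=0.05):
    """Purely quantized: store only quantized weights, rounding after every update.
    Any step smaller than delta/2 rounds back to the current point."""
    w = quantize(w0, delta)
    for _ in range(steps):
        w = quantize(w - lr * grad_fn(w), delta)
    return w

if __name__ == "__main__":
    # Toy convex example: f(w) = ||w - w*||^2 / 2, so grad f(w) = w - w*.
    target = np.array([0.42, -0.17])
    grad_fn = lambda w: w - target
    w0 = np.zeros(2)
    print("FP buffer      :", train_with_fp_buffer(grad_fn, w0))
    print("Fully quantized:", train_fully_quantized(grad_fn, w0))
```

On this toy problem the buffered scheme settles at the grid point nearest the optimum, while the purely quantized update stalls in any coordinate whose gradient step is smaller than half the grid spacing: the rounding effect the abstract attributes to training with low-precision arithmetic alone.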




Neural Information Processing Systems

We then separate the optimization process into two steps, corresponding to the weight update and the structure-parameter update. For the former step we use the conventional chain rule, whose computation can be made sparse by exploiting the sparse structure.
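A minimal sketch of such a two-step alternating update, assuming hypothetical gradient callbacks (`grad_w`, `grad_s`) and a mask helper (`mask_fn`) that stand in for the paper's chain-rule gradients and sparse structure; none of these names come from the source.

```python
import numpy as np

def alternating_step(w, s, grad_w, grad_s, lr_w=0.01, lr_s=0.001, mask_fn=None):
    """One round of the two-step scheme: weights first, then structure parameters.

    grad_w(w, s) and grad_s(w, s) are stand-ins for the chain-rule gradients of
    the training loss; mask_fn(s) derives a sparsity mask from the structure
    parameters so the weight gradient is applied only where connections survive.
    """
    g_w = grad_w(w, s)
    if mask_fn is not None:
        g_w = g_w * mask_fn(s)          # skip gradients of pruned connections
    w = w - lr_w * g_w                  # step 1: weight update
    s = s - lr_s * grad_s(w, s)         # step 2: structure-parameter update
    return w, s

# Toy usage: weights decay toward zero, structure parameters are pulled toward 1.
grad_w = lambda w, s: w
grad_s = lambda w, s: s - 1.0
mask_fn = lambda s: (s > 0.5).astype(float)
w, s = np.ones(4), np.full(4, 0.8)
for _ in range(100):
    w, s = alternating_step(w, s, grad_w, grad_s, mask_fn=mask_fn)
```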






Boosting Learning for LDPC Codes to Improve the Error-Floor Performance

Neural Information Processing Systems

These works assume an arbitrary neural network with no prior knowledge of decoding algorithms and accordingly face the challenge of learning a decoding algorithm.